This section is a first pass at using some methods from the literature to aggregate metrics. I should say at the start that we have a pretty narrow selection of metrics to work with so far, and they do not do a great job of capturing the breadth of the dimension. I’m also working with just the county-level data here. This provides some opportunities to use data-driven analyses like PCA, but it is worth noting that these will not get us to the holistic, system-wide measurements of sustainability we are after without including some normative judgments about how to combine geographic areas as well as our five dimensions. So, let’s go through the motions here, see how the process unfolds, and note anything worth digging into more down the road.
1 Imputation
PCA requires complete data, so we either have to impute missing values, delete incomplete rows, or use probabilistic PCA (PPCA). I’m choosing to impute with missForest here, as it handles missing-at-random (MAR) patterns and non-linear relationships reasonably well, but PPCA is certainly worth exploring.
Code
pacman::p_load(dplyr, missForest, purrr, tibble)

source('dev/filter_fips.R')

env_county <- readRDS('data/temp/env_county.rds')

# Wrangle dataset. Need all numeric vars or factor vars, and can't be a tibble.
# Also removing character vars - can't use these in PCA.
# Using old Connecticut counties - some lulc data is missing for them though.
dat <- env_county %>%
  filter_fips('old') %>%
  select(fips, where(is.numeric)) %>%
  column_to_rownames('fips') %>%
  as.data.frame()
# get_str(dat)
# skimr::skim(dat)

# Remove variables with the most missing data - too much to impute.
# Also remove the proportional LULC values - keeping diversity though.
dat <- dat %>%
  select(-matches('consIncome'), -matches('^lulcProp'))

# Impute missing values
set.seed(42)
mf_out <- dat %>%
  missForest(
    ntree = 200,
    mtry = 10,
    verbose = FALSE,
    variablewise = FALSE
  )

# Save imputed dataset
imp <- mf_out$ximp

# Print out-of-bag (OOB) imputation error
mf_out$OOBerror
NRMSE
0.001503807
2 Standardization
Centering and scaling to give every variable a mean of 0 and SD of 1.
Code
dat <- map_dfc(imp, ~ scale(.x, center = TRUE, scale = TRUE))
Now that we have standardized variables, we have to make normative decisions about what constitutes a good or bad value. This will certainly be a collaborative process where we seek input from teams to come to some kind of consensus once we have primary data. Until then, I’m going to make some heroic assumptions: LULC diversity is good, above-ground forest biomass is good, conservation practices and easements are good, and fertilizer expenses are bad. Open to thoughts here as always.
With that, we can recode our normalized variables accordingly.
Code
normed <- dat %>%
  mutate(across(matches('^fert'), ~ -.x))
3 Component Extraction
Determine the number of components to extract using a few tools: very simple structure (VSS), Velicer’s minimum average partial (MAP) test, parallel analysis, and a scree plot.
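These diagnostics can all be run with the psych package. A minimal sketch of what those calls might look like (the arguments here are assumptions, not the original code):

Code

pacman::p_load(psych)

# VSS and Velicer's MAP test (both printed in the vss() output)
vss_out <- vss(normed, n = 8, rotate = 'varimax')

# Parallel analysis, which also draws a scree plot of the components
fa.parallel(normed, fa = 'pc')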
The scree plot shows the eigenvalue of each principal component (the variance explained, in units of one standardized variable) on the y-axis against the component number on the x-axis. The first few components explain a lot of the variance, but there is a decent elbow around the fourth component.
VSS suggests 1 or 2 components, MAP suggests 8, and parallel analysis suggests 3. I’m going with 3 here, which is explained further below.
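With three components settled on, the extraction itself presumably looks something like the sketch below. The RC1-RC3 labels discussed later are consistent with psych::principal under a varimax rotation, and the object name pca_out matches what the weighting code further down expects, but the exact call is an assumption:

Code

# Extract 3 rotated components (rotation and other arguments are assumptions)
pca_out <- principal(
  normed,
  nfactors = 3,
  rotate = 'varimax'
)

# Eigenvalues and proportions of variance explained per component
pca_out$Vaccounted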
Recommendations for creating composite indices are to extract components that each have an eigenvalue > 1 and explained variance > 0.10, and such that the cumulative explained variance of the retained set is > 0.60 (Nicoletti 2000; OECD 2008).
Our total cumulative variance explained is 0.74, and the component that explains the least variance is RC3 at 0.11. Note that extracting four or more components gives us a component below the 0.10 threshold, which is why we are sticking to three. The first component (RC1) explains 38% of the variance in the data. The second is respectable at 0.26, while the third is barely above the threshold at 0.11.
Looking at the metrics, we can see that the first component loads mostly onto the conservation practices, no-till acres, cover cropping, drainage, and total fertilizer expenses. The second component loads onto mean above-ground biomass (although there is cross-loading with the first component), operations with silvopasture, operations with easements, rotational grazing operations, and operations with fertilizer expenses. This seems to be catching more of the population-related metrics. The last component only loads onto a few metrics: easement acres, easement acres per farm, and silvopasture operations (which has some heavy cross-loading).
4 Aggregation
Here, we follow Nicoletti and calculate the normalized squared factor loadings, which represent the proportion of the total unit variance of each metric that is explained by the component.
Code
## Get metric weights following Nicoletti 2000

# Pull out metric loadings
loadings <- pca_out$weights %>%
  as.data.frame()

# For each set of loadings, get squares, then normalize to proportions
sq_loadings <- map(loadings, ~ .x^2)
metric_weights <- map(sq_loadings, ~ .x / sum(.x))

head(as.data.frame(metric_weights))
Now we can use these to weight metrics and aggregate them into a component score for each county.
Code
# Component scores for each component across each county
component_scores <- map(metric_weights, \(x) {
  as.matrix(normed) %*% x
}) %>%
  as.data.frame()

head(component_scores)
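As a check, we can correlate these hand-weighted scores with the regression-based scores that psych::principal returns in pca_out$scores. This pairing is my assumption about what is being compared below; a minimal sketch:

Code

# Correlate each weighted score column with the corresponding psych score
# (assumes pca_out from the extraction sketch above)
map2_dbl(
  as.data.frame(pca_out$scores),
  component_scores,
  ~ cor(.x, .y)
)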
It looks like they are reasonably similar, although RC2 and RC3 have substantially lower correlation coefficients. It will be worth noting this and coming back to explore the differences at some point.
For now, let’s keep following Nicoletti and aggregate the component scores into a single variable.
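Nicoletti weights each component by its share of the total explained variance. A minimal sketch, assuming the weights come straight from pca_out$Vaccounted (the original calculation may differ, which could bear on the surprising ordering noted next):

Code

# Weight each component by its proportion of the total explained variance
# (assumes pca_out from psych::principal as sketched above)
component_weights <- pca_out$Vaccounted['Proportion Explained', ]
component_weights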
Curiously, the component that accounted for the most variance gets the lowest weight. Worth doing a deeper dive here at some point to figure out why that is.
We will use these weights to combine the components into a single dimension score, as sketched below.
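A minimal sketch of that weighted combination, assuming the hypothetical component_weights vector above:

Code

# Weighted sum of component scores gives one dimension score per county
dimension_score <- as.matrix(component_scores) %*% component_weights
head(dimension_score)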
Now that we have all three component scores and the dimension score, let’s take a look at a map. Select the data to display with the layer button on the left.
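The rendered map is interactive with one layer per score; as a sketch, something like the leaflet code below could build it, showing just the dimension score layer. The tigris call, the join on fips, and the palette are all assumptions for illustration:

Code

pacman::p_load(leaflet, sf, tigris)

# Reattach county fips and gather scores (hypothetical wrangling)
scores_df <- component_scores %>%
  mutate(
    dimension = as.vector(dimension_score),
    fips = rownames(imp)
  )

# Pull county geometries and join on fips. Using 2021 boundaries so the old
# Connecticut counties still match - an assumption about the original map.
counties_sf <- tigris::counties(cb = TRUE, year = 2021) %>%
  mutate(fips = GEOID) %>%
  inner_join(scores_df, by = 'fips')

pal <- colorNumeric('viridis', domain = counties_sf$dimension)

leaflet(counties_sf) %>%
  addProviderTiles(providers$CartoDB.Positron) %>%
  addPolygons(
    fillColor = ~ pal(dimension),
    weight = 1,
    color = 'white',
    fillOpacity = 0.7,
    label = ~ fips
  )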